Balancing Explicability and Explanation in Human-Aware Planning
Abstract
Human-aware planning requires an agent to be aware of the intentions, capabilities, and mental model of the human in the loop during its decision process. This can involve generating plans that are explicable (Zhang et al. 2017) to a human observer, as well as the ability to provide explanations (Chakraborti et al. 2017) when such plans cannot be generated. This has led to the notion of "multi-model planning", which aims to incorporate the effects of human expectation into the deliberative process of a planner, either in the form of explicable task planning or of the explanations produced thereof. In this paper, we bring these two concepts together and show how a planner can account for both needs and achieve a trade-off during the plan generation process itself, by means of a model-space search method, MEGA. In effect, this provides a comprehensive perspective on what it means for a decision-making agent to be "human-aware", by bringing together existing principles of planning under the umbrella of a single plan generation process. We situate our discussion specifically with the recent work on explicable planning and explanation generation in mind, and illustrate these concepts in modified versions of two well-known planning domains, as well as in a demonstration on a robot involved in a typical search-and-reconnaissance task with an external supervisor.

arXiv:1708.00543v1 [cs.AI] 1 Aug 2017

*Authors marked with an asterisk contributed equally.

Introduction

It is often useful for a planner interacting with a human in the loop to use, in the process of its deliberation, not only the model M^R of the task it has on its own, but also the model M^h that the human thinks it has (refer to Figure 1). This is, in essence, the fundamental thesis of the recent works on plan explanations (Chakraborti et al. 2017) and explicable planning (Zhang et al. 2017), summarized under the umbrella of multi-model planning, and is in addition to the originally studied human-aware planning problems where the actions of the human (and hence the actual human model and the robot's belief of it) are also involved in the planning process. The need for explicable planning or plan explanations arises precisely when these two models, M^R and M^h, diverge. This means that the optimal plans in the respective models, π*_{M^R} and π*_{M^h}, may not be the same, and hence behavior that is optimal in the robot's own model may be inexplicable to the human in the loop. In the explicable planning process, the robot produces a plan π̂ that is closer to the human's expected plan, i.e. π̂ ≈ π*_{M^h}. In the explanation process, the robot instead updates the human to an intermediate model M̂^h in which the robot's original plan is also optimal and hence explicable, i.e. π*_{M^R} = π*_{M̂^h}. (Note that the equality operator on plans is somewhat overloaded here to mean either exact equality or, in general, equivalence with respect to a metric such as cost or similarity.)

Figure 1: The planner accounts for the human's model of itself in addition to its own model. It can either choose to bring the human's model closer to the ground truth using explanations, via a process called model reconciliation, so that an otherwise inexplicable plan now makes sense in the human's updated model, and/or it can produce explicable plans that are closer to the human's expectation of optimality.

However, until now, these two processes of plan explanations and explicability, even though acknowledged in the cited work as being complementary, have remained separate insofar as their role in an agent's deliberative process is concerned: a planner either generates an explicable plan to the best of its ability, or it produces explanations of its plans where required.
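The divergence between M^R and M^h, and its resolution by model reconciliation, can be made concrete with a toy sketch. This is not from the paper: the domain, the action names, and the dictionary representation of models are illustrative assumptions; only the underlying idea (an explanation is a model update after which the robot's plan is also optimal for the human) follows the text above.

```python
# Toy illustration (hypothetical domain): two models of the same task
# disagree on one action's cost, so their optimal plans diverge.
# An "explanation" is a model update that reconciles them.

def plan_cost(plan, model):
    """Cost of a plan (tuple of action names) under a model (action -> cost)."""
    return sum(model[a] for a in plan)

def optimal(plans, model):
    """The cheapest plan among the candidates under the given model."""
    return min(plans, key=lambda p: plan_cost(p, model))

robot_model = {"shortcut": 2, "detour": 5, "deliver": 1}
human_model = {"shortcut": 9, "detour": 5, "deliver": 1}  # human overestimates the shortcut

plans = [("shortcut", "deliver"), ("detour", "deliver")]

# The models diverge: the robot prefers the shortcut, the human expects the detour,
# so the robot's optimal plan is inexplicable to the human.
assert optimal(plans, robot_model) != optimal(plans, human_model)

# Model reconciliation: explain the shortcut's true cost, yielding an
# intermediate human model in which the robot's plan is also optimal.
explanation = {"shortcut": robot_model["shortcut"]}
updated_human_model = {**human_model, **explanation}
assert optimal(plans, robot_model) == optimal(plans, updated_human_model)
```

The explicable-planning alternative in this sketch would be for the robot to execute ("detour", "deliver"), i.e. the plan the human already expects, at a higher cost in its own model.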
This is not always desirable if either the expected human plan is too costly in the planner's model or the communication overhead of providing explanations is too high; instead, there may be situations where a combination of the two provides a much better course of action. Attaining this sweet spot between explanations and explicability is the focus of the current paper. From the perspective of the design of autonomy, this has two important implications: (1) as mentioned before, an agent can now not only explain but also plan in the multi-model setting, with the trade-off between compromising on its own optimality and offering possible explanations in mind; and (2) the argumentation process is known to be a crucial function of the reasoning capabilities of humans (Mercier and Sperber 2010), and now, by extension, of planners (or robots as embodiments thereof) as well, as a result of the algorithms we discuss here, which incorporate the explanation generation process into the decision making of the agent itself.
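The trade-off described above can be sketched as a simple selection criterion. This is only a toy rendering, not the MEGA algorithm itself: the candidate plans, their costs, the function name `best_tradeoff`, and the simplified linear objective (explanation size plus α times the extra cost the robot incurs over its own optimum) are all illustrative assumptions.

```python
# Toy sketch of the explicability/explanation trade-off (hypothetical costs).
# Each candidate is (name, cost in the robot's model M^R, size of the smallest
# explanation that would make it optimal in the human's updated model).
# Simplified objective: |E| + alpha * (cost(pi, M^R) - optimal cost in M^R),
# where alpha weighs loss of robot optimality against communication overhead.

def best_tradeoff(candidates, optimal_cost, alpha):
    """Pick the candidate minimizing the combined explanation + explicability cost."""
    def objective(c):
        _, robot_cost, expl_size = c
        return expl_size + alpha * (robot_cost - optimal_cost)
    return min(candidates, key=objective)

# Hypothetical scenario: the robot-optimal plan needs a long explanation,
# the fully explicable plan is expensive, and a middle plan balances both.
candidates = [
    ("robot-optimal", 10, 6),   # cheap for the robot, needs 6 model updates
    ("explicable",    18, 0),   # matches human expectation, no explanation needed
    ("balanced",      12, 2),   # slight compromise on both
]
print(best_tradeoff(candidates, optimal_cost=10, alpha=1.0))  # → ('balanced', 12, 2)
```

Varying α recovers the two extremes the paper unifies: a small α favors the purely explicable plan, while a large α favors the robot-optimal plan plus a full explanation.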
Journal: CoRR
Volume: abs/1708.00543
Pages: -
Year of publication: 2017